week 8: multilevel models

multilevel adventures

divergent transitions

From McElreath:

Recall that HMC simulates the frictionless flow of a particle on a surface. In any given transition, which is just a single flick of the particle, the total energy at the start should be equal to the total energy at the end. That’s how energy in a closed system works. And in a purely mathematical system, the energy is always conserved correctly. It’s just a fact about the physics.

But in a numerical system, it might not be. Sometimes the total energy is not the same at the end as it was at the start. In these cases, the energy is divergent. How can this happen? It tends to happen when the posterior distribution is very steep in some region of parameter space. Steep changes in probability are hard for a discrete physics simulation to follow. When that happens, the algorithm notices by comparing the energy at the start to the energy at the end. When they don’t match, it indicates numerical problems exploring that part of the posterior distribution.
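
In practice, Stan counts these failures for you. As a quick sketch (not from the lecture): assuming a fitted brms model `fit`, the per-iteration `divergent__` flag can be tallied with `nuts_params()`, which brms re-exports from bayesplot.

Code
# count divergent transitions across all chains and post-warmup iterations
# (divergent__ is 1 for each iteration whose transition diverged)
np <- nuts_params(fit)
sum(np$Value[np$Parameter == "divergent__"])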

centered parameterization

In his lecture, McElreath uses CENTERED PARAMETERIZATION to demonstrate divergent transitions. A very simple example:

\[\begin{align*} x &\sim \text{Normal}(0, \text{exp}(\nu)) \\ \nu &\sim \text{Normal}(0, 3) \\ \end{align*}\]

This parameterization is centered because the prior for one parameter (\(x\)) is defined directly in terms of another parameter (\(\nu\)). It’s intuitive, but it can cause a lot of problems for Stan, which is probably why McElreath used it for his example. In short, when there is limited data within our groups or the population variance is small, the parameters \(x\) and \(\nu\) become highly correlated. This geometry is challenging for MCMC to sample. (Think of a long and narrow groove, not a bowl, for your Hamiltonian skateboard.)

Code
set.seed(1)
# plot the likelihoods
ps <- seq( from=-4, to=4, length.out=200) # possible parameter values for both x and nu

crossing(nu = ps, x=ps) %>%  #every possible combination of nu and x
  mutate(
    likelihood_nu = dnorm(nu, 0, 3),
    likelihood_x  = dnorm(x, 0, exp(nu)),
    joint_likelihood = likelihood_nu*likelihood_x
  ) %>% 
  ggplot( aes(x=x, y=nu, fill=joint_likelihood) ) +
  geom_raster() + 
  scale_fill_viridis_c() +
  guides(fill = "none")

The way to fix this is to use a NON-CENTERED PARAMETERIZATION:

\[\begin{align*} x &= z\times \text{exp}(\nu) \\ z &\sim \text{Normal}(0, 1) \\ \nu &\sim \text{Normal}(0, 3) \\ \end{align*}\]

Code
set.seed(1)
# plot the likelihoods
ps <- seq( from=-4, to=4, length.out=200) # possible parameter values for both x and nu

crossing(nu = ps, z=ps) %>%  # every possible combination of nu and z
  mutate(
    likelihood_nu = dnorm(nu, 0, 3),
    likelihood_z  = dnorm(z, 0, 1),
    joint_likelihood = likelihood_nu*likelihood_z
  ) %>% 
  ggplot( aes(x=z, y=nu, fill=joint_likelihood) ) +
  geom_raster() +
  scale_fill_viridis_c() +
  guides(fill = "none")
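
As a quick sanity check (a sketch, not from the lecture): drawing \(z\) and \(\nu\) from the non-centered priors and multiplying reconstructs \(x\) with the same marginal distribution as the centered version.

Code
set.seed(2)
nu <- rnorm(1e4, 0, 3)  # nu ~ Normal(0, 3)
z  <- rnorm(1e4, 0, 1)  # z ~ Normal(0, 1)
x  <- z * exp(nu)       # marginally identical to x ~ Normal(0, exp(nu))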

It’s an important point, but the issues with centered parameterization are so prevalent1 that brms generally doesn’t allow it (with some exceptions). So we can’t recreate the divergent-transitions situation that McElreath demonstrates in his lecture.

McElreath describes the problem of fertility in Bangladesh as follows:

\[\begin{align*} C &\sim \text{Bernoulli}(p_i) \\ \text{logit}(p_i) &= \alpha_{D_{[i]}} \\ \alpha_j &\sim \text{Normal}(\bar{\alpha}, \sigma) \\ \bar{\alpha} &\sim \text{Normal}(0, 1) \\ \sigma &\sim \text{Exponential}(1) \\ \end{align*}\]

But to fit this using brms, we’ll rewrite it as:

\[\begin{align*} C &\sim \text{Bernoulli}(p_i) \\ \text{logit}(p_i) &= \alpha + \alpha_{D[i]} \\ \alpha &\sim \text{Normal}(0, 1) \\ \alpha_{D[j]} &\sim \text{Normal}(0, \sigma_{D}) \\ \sigma_{D} &\sim \text{Exponential}(1) \end{align*}\]

data(bangladesh, package="rethinking")
d <- bangladesh

m1 <- brm(
  data=d,
  family=bernoulli,
  use.contraception ~ 1 + (1 | district),
  prior = c( prior(normal(0, 1), class = Intercept), # alpha bar
             prior(exponential(1), class = sd)),       # sigma

  chains=4, cores=4, iter=2000, warmup=1000,
  seed = 1,
  file = here("files/data/generated_data/m71.1"))
m1
 Family: bernoulli 
  Links: mu = logit 
Formula: use.contraception ~ 1 + (1 | district) 
   Data: d (Number of observations: 1934) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Multilevel Hyperparameters:
~district (Number of levels: 60) 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)     0.52      0.09     0.37     0.70 1.00     1374     1915

Regression Coefficients:
          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept    -0.54      0.09    -0.72    -0.37 1.00     1998     2342

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
gather_draws(m1, b_Intercept, r_district[district, ]) %>% 
  with_groups(c(.variable, district), median_qi, .value)
# A tibble: 61 × 8
# Groups:   .variable, district [61]
   .variable   district  .value .lower  .upper .width .point .interval
   <chr>          <int>   <dbl>  <dbl>   <dbl>  <dbl> <chr>  <chr>    
 1 b_Intercept       NA -0.536  -0.715 -0.369    0.95 median qi       
 2 r_district         1 -0.454  -0.864 -0.0464   0.95 median qi       
 3 r_district         2 -0.0482 -0.757  0.610    0.95 median qi       
 4 r_district         3  0.301  -0.702  1.35     0.95 median qi       
 5 r_district         4  0.343  -0.239  0.964    0.95 median qi       
 6 r_district         5 -0.0297 -0.592  0.510    0.95 median qi       
 7 r_district         6 -0.275  -0.773  0.197    0.95 median qi       
 8 r_district         7 -0.216  -0.945  0.478    0.95 median qi       
 9 r_district         8  0.0236 -0.567  0.603    0.95 median qi       
10 r_district         9 -0.162  -0.866  0.453    0.95 median qi       
# ℹ 51 more rows
Code
gather_draws(m1, b_Intercept, r_district[district, ]) %>% 
  with_groups(c(.variable, district), median_qi, .value) %>% 
  ggplot(aes( x=district, y=.value)) +
  geom_pointinterval( aes(ymin = .lower, ymax = .upper), 
                      alpha=.5) +
  labs(y="District distance from mean") +
  coord_flip()
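
To read the varying intercepts on the probability scale, we can add each district’s offset to the grand intercept and apply the inverse logit. A sketch (not in McElreath’s lecture): `spread_draws()` is from tidybayes, and `plogis()` is base R’s inverse-logit.

Code
spread_draws(m1, b_Intercept, r_district[district, ]) %>% 
  mutate(p = plogis(b_Intercept + r_district)) %>% # per-draw probability of contraceptive use
  group_by(district) %>% 
  median_qi(p)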

Next, McElreath adds an urban indicator \(U\) with a varying slope for each district (the double-bar `||` syntax in the brms formula below fits the intercepts and slopes as uncorrelated, matching the independent priors here):

\[\begin{align*} C &\sim \text{Bernoulli}(p_i) \\ \text{logit}(p_i) &= \alpha + \alpha_{D[i]} + \beta U_i + \beta_{D[i]}U_i \\ \alpha, \beta &\sim \text{Normal}(0, 1) \\ \alpha_{D[j]} &\sim \text{Normal}(0, \sigma_{D}) \\ \beta_{D[j]} &\sim \text{Normal}(0, \tau_{D}) \\ \sigma_{D}, \tau_{D} &\sim \text{Exponential}(1) \\ \end{align*}\]

m2 <- brm(
  data=d,
  family=bernoulli,
  use.contraception ~ 1 + urban + (1 + urban || district),
  prior = c( prior(normal(0, 1), class = Intercept), 
             prior(normal(0, 1), class = b),
             prior(exponential(1), class = sd)),     

  chains=4, cores=4, iter=2000, warmup=1000,
  seed = 1,
  file = here("files/data/generated_data/m71.2"))

Oops, no divergent transitions.
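
We can confirm that directly; a one-line sketch using rstan’s helper on the stanfit object that brms stores in `m2$fit`:

rstan::get_num_divergent(m2$fit)  # 0 when no transitions diverged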

m2
 Family: bernoulli 
  Links: mu = logit 
Formula: use.contraception ~ 1 + urban + (1 + urban || district) 
   Data: d (Number of observations: 1934) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Multilevel Hyperparameters:
~district (Number of levels: 60) 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)     0.48      0.09     0.32     0.67 1.01     1290     2067
sd(urban)         0.55      0.21     0.11     0.96 1.00      860      912

Regression Coefficients:
          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept    -0.70      0.09    -0.88    -0.53 1.00     2275     2893
urban         0.63      0.15     0.33     0.92 1.00     2391     2077

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).

more about divergent transitions

From Gelman et al (2020)

slopes

Let’s start by simulating the cafe data.

# ---- set population-level parameters -----
a <- 3.5       # average morning wait time
b <- (-1)      # average difference afternoon wait time
sigma_a <- 1   # std dev in intercepts
sigma_b <- 0.5 # std dev in slopes
rho <- (-0.7)  #correlation between intercepts and slopes

# ---- create vector of means ----
Mu <- c(a, b)

# ---- create matrix of variances and covariances ----
sigmas <- c(sigma_a,sigma_b) # standard deviations
Rho <- matrix( c(1,rho,rho,1) , nrow=2 ) # correlation matrix
# now matrix multiply to get covariance matrix
Sigma <- diag(sigmas) %*% Rho %*% diag(sigmas)

# ---- simulate intercepts and slopes -----
N_cafes <- 20  # number of cafes
library(MASS)
set.seed(5)
vary_effects <- mvrnorm( n=N_cafes, mu = Mu, Sigma=Sigma)
a_cafe <- vary_effects[, 1]
b_cafe <- vary_effects[, 2]

# ---- simulate observations -----

set.seed(22)
N_visits <- 10
afternoon <- rep(0:1,N_visits*N_cafes/2)
cafe_id <- rep( 1:N_cafes , each=N_visits )
mu <- a_cafe[cafe_id] + b_cafe[cafe_id]*afternoon
sigma <- 0.5 # std dev within cafes
wait <- rnorm( N_visits*N_cafes , mu , sigma )
d <- data.frame( cafe=cafe_id , afternoon=afternoon , wait=wait )
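
A quick sanity check on the simulation (a sketch): with only 20 cafes, the empirical correlation between the simulated intercepts and slopes should be near, but not exactly, the population value rho = -0.7.

cor(a_cafe, b_cafe)  # roughly -0.7, up to sampling noise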

a simulation note from RM

In this exercise, we are simulating data from a generative process and then analyzing that data with a model that reflects exactly the correct structure of that process. But in the real world, we’re never so lucky. Instead we are always forced to analyze data with a model that is MISSPECIFIED: The true data-generating process is different than the model. Simulation can be used however to explore misspecification. Just simulate data from a process and then see how a number of models, none of which match exactly the data-generating process, perform. And always remember that Bayesian inference does not depend upon data-generating assumptions, such as the likelihood, being true. Non-Bayesian approaches may depend upon sampling distributions for their inferences, but this is not the case for a Bayesian model. In a Bayesian model, a likelihood is a prior for the data, and inference about parameters can be surprisingly insensitive to its details.

**Mathematical model:**

likelihood function and linear model

\[\begin{align*} W_i &\sim \text{Normal}(\mu_i, \sigma) \\ \mu_i &= \alpha_{CAFE[i]} + \beta_{CAFE[i]}A_i \end{align*}\]

varying intercepts and slopes

\[\begin{align*} \begin{bmatrix} \alpha_{CAFE[i]} \\ \beta_{CAFE[i]} \end{bmatrix} &\sim \text{MVNormal}( \begin{bmatrix} \alpha \\ \beta \end{bmatrix}, \mathbf{S}) \\ \mathbf{S} &= \begin{pmatrix} \sigma_{\alpha} & 0 \\ 0 & \sigma_{\beta} \end{pmatrix}\mathbf{R}\begin{pmatrix} \sigma_{\alpha} & 0 \\ 0 & \sigma_{\beta} \end{pmatrix} \\ \end{align*}\]

priors

\[\begin{align*} \alpha &\sim \text{Normal}(5,2) \\ \beta &\sim \text{Normal}(-1,0.5) \\ \sigma &\sim \text{Exponential}(1) \\ \sigma_{\alpha} &\sim \text{Exponential}(1) \\ \sigma_{\beta} &\sim \text{Exponential}(1) \\ \mathbf{R} &\sim \text{LKJcorr}(2) \end{align*}\]
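
For completeness, here’s a sketch of how this model might be fit with brms, assuming the simulated data frame `d` from above (the object name `m3` is mine, and `lkj(2)` is brms’s LKJ correlation prior):

Code
m3 <- brm(
  data = d,
  family = gaussian,
  wait ~ 1 + afternoon + (1 + afternoon | cafe),
  prior = c(prior(normal(5, 2), class = Intercept),   # alpha
            prior(normal(-1, 0.5), class = b),        # beta
            prior(exponential(1), class = sd),        # sigma_alpha, sigma_beta
            prior(exponential(1), class = sigma),     # sigma (within-cafe)
            prior(lkj(2), class = cor)),              # R
  chains = 4, cores = 4, iter = 2000, warmup = 1000,
  seed = 1)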